Wasserstein Dictionary Learning: Optimal Transport-Based Unsupervised Nonlinear Dictionary Learning

Authors
Abstract


Similar articles

SUPPLEMENTARY MATERIALS: Wasserstein Dictionary Learning: Optimal Transport-based unsupervised non-linear dictionary learning∗

∗Part of this work was presented as a conference proceeding [SM1]. †Astrophysics Department, IRFU, CEA, Université Paris-Saclay, F-91191 Gif-sur-Yvette, France ([email protected]) Université Paris-Diderot, AIM, Sorbonne Paris Cité, CEA, CNRS, F-91191 Gif-sur-Yvette, France ‡Université de Lyon, CNRS/LIRIS, Lyon, France §LIST, Data Analysis Tools Laboratory, CEA Saclay, France ¶Centre de Rech...

Full text

Wasserstein Dictionary Learning: Optimal Transport-based unsupervised non-linear dictionary learning

This paper introduces a new nonlinear dictionary learning method for histograms in the probability simplex. The method leverages optimal transport theory, in the sense that our aim is to reconstruct histograms using so-called displacement interpolations (a.k.a. Wasserstein barycenters) between dictionary atoms; such atoms are themselves synthetic histograms in the probability simplex. Our metho...

Full text
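As a concrete point of reference for the abstract above: the displacement interpolations (Wasserstein barycenters) it mentions are commonly computed with entropy-regularized, Sinkhorn-type iterations. The sketch below, in the spirit of iterative Bregman projections, computes such a regularized barycenter in plain NumPy; the grid, ground cost, regularization strength `eps`, and iteration count are illustrative assumptions, not the paper's exact algorithm or parameters.

```python
import numpy as np

def sinkhorn_barycenter(atoms, weights, cost, eps=1e-2, n_iter=200):
    """Entropy-regularized Wasserstein barycenter of histograms.

    atoms   : (K, N) array; each row is a histogram on the simplex
    weights : (K,) barycentric weights (nonnegative, summing to one)
    cost    : (N, N) ground cost between histogram bins
    eps     : entropic regularization strength (illustrative value)
    """
    gibbs = np.exp(-cost / eps)             # Gibbs kernel
    v = np.ones_like(atoms)                 # one row of scaling variables per atom
    bary = None
    for _ in range(n_iter):
        u = atoms / (v @ gibbs.T)           # first Sinkhorn scaling update
        # geometric mean across atoms gives the current barycenter estimate
        bary = np.exp(weights @ np.log(u @ gibbs))
        v = bary[None, :] / (u @ gibbs)     # second scaling update
    return bary

# Toy example: the half/half barycenter of two bumps on a 1-D grid.
n = 100
x = np.linspace(0.0, 1.0, n)
cost = (x[:, None] - x[None, :]) ** 2       # squared Euclidean ground cost
a1 = np.exp(-((x - 0.2) ** 2) / 0.002)
a1 /= a1.sum()
a2 = np.exp(-((x - 0.8) ** 2) / 0.002)
a2 /= a2.sum()
bary = sinkhorn_barycenter(np.vstack([a1, a2]), np.array([0.5, 0.5]), cost)
print(bary.sum())                           # close to 1: the output is itself a histogram
```

In the dictionary learning setting described above, the atoms themselves would additionally be optimized so that such barycenters reconstruct the observed histograms; the sketch only covers the barycenter step.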

Speech Enhancement using Adaptive Data-Based Dictionary Learning

In this paper, a speech enhancement method based on sparse representation of data frames is presented. Speech enhancement is one of the most widely applied areas of signal processing. The objective of a speech enhancement system is to improve either the intelligibility or the quality of speech signals. This process is carried out using speech signal processing techniques ...

Full text
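For orientation, the sketch below shows a generic frame-wise sparse-coding denoiser as a rough stand-in for the adaptive, data-based dictionary learning described above; the synthetic signal, frame length, dictionary size, and sparsity level are all assumptions made for illustration.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

rng = np.random.default_rng(0)

# Toy "speech": two sinusoids plus noise, cut into fixed-length frames.
fs, frame_len = 8000, 160                     # assumed sampling rate and frame size
t = np.arange(2 * fs) / fs
clean = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)
noisy = clean + 0.3 * rng.standard_normal(clean.shape)
frames = noisy.reshape(-1, frame_len)         # 16000 samples -> 100 frames

# Learn a dictionary from the noisy frames themselves -- a crude stand-in
# for the adaptive, data-based dictionary the abstract refers to.
dico = MiniBatchDictionaryLearning(n_components=32, batch_size=16, random_state=0)
dico.fit(frames)

# Sparse-code each frame with a few atoms and reconstruct; the sparsity
# constraint acts as the denoising prior here.
codes = sparse_encode(frames, dico.components_, algorithm="omp", n_nonzero_coefs=4)
enhanced = (codes @ dico.components_).reshape(-1)
print("MSE noisy:   ", np.mean((noisy - clean) ** 2))
print("MSE enhanced:", np.mean((enhanced - clean) ** 2))
```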

Incremental Dictionary Learning for Unsupervised Domain Adaptation

Domain adaptation (DA) methods attempt to solve the domain mismatch problem between source and target data. In this paper, we propose an incremental dictionary learning method where some target data called supportive samples are selected to assist adaptation. The idea is partially inspired by the bootstrapping-based methods [1, 3], which choose from the target domain some samples and add them i...

Full text
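A rough prototype of the incremental scheme sketched above might look as follows; the selection rule for "supportive samples" (lowest reconstruction error under the current dictionary), the number of samples added per round, and the use of scikit-learn's dictionary learner are assumptions made here for illustration, not details taken from the paper.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

rng = np.random.default_rng(0)

# Synthetic source/target data with a mild domain shift.
source = rng.standard_normal((300, 20))
target = rng.standard_normal((300, 20)) + 0.5          # shifted target domain

train_pool = source.copy()
remaining = target.copy()

for round_idx in range(3):                              # a few incremental rounds
    dico = MiniBatchDictionaryLearning(n_components=15, random_state=0)
    dico.fit(train_pool)

    # Sparse-code the remaining target samples with the current dictionary.
    codes = sparse_encode(remaining, dico.components_,
                          algorithm="omp", n_nonzero_coefs=3)
    residual = np.linalg.norm(remaining - codes @ dico.components_, axis=1)

    # "Supportive" samples: target points the dictionary already explains well
    # (an assumed selection rule; the paper's actual criterion may differ).
    keep = np.argsort(residual)[:50]
    train_pool = np.vstack([train_pool, remaining[keep]])
    remaining = np.delete(remaining, keep, axis=0)
    print(f"round {round_idx}: pool size {len(train_pool)}, "
          f"median residual {np.median(residual):.3f}")
```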

Fast Dictionary Learning with a Smoothed Wasserstein Loss

We consider in this paper the dictionary learning problem when the observations are normalized histograms of features. This problem can be tackled using non-negative matrix factorization approaches, using typically Euclidean or Kullback-Leibler fitting errors. Because these fitting errors are separable and treat each feature on equal footing, they are blind to any similarity the features may sh...

Full text
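The separable NMF baseline this abstract contrasts with (Euclidean or Kullback-Leibler fitting errors that compare each bin independently) can be run in a few lines with scikit-learn; the sketch below shows only that baseline, not the smoothed Wasserstein loss itself, and the data, rank, and iteration budget are illustrative.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Toy data: rows are normalized histograms over 50 bins.
X = rng.random((200, 50))
X /= X.sum(axis=1, keepdims=True)

# NMF with a Kullback-Leibler fitting error (multiplicative updates).
# Each bin is compared independently, so the loss ignores any geometry
# between bins -- the shortcoming a Wasserstein loss is meant to address.
model = NMF(n_components=10, beta_loss="kullback-leibler",
            solver="mu", max_iter=500, random_state=0)
W = model.fit_transform(X)          # per-histogram weights
H = model.components_               # dictionary of nonnegative "atoms"
print(W.shape, H.shape, model.reconstruction_err_)
```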


Journal

Journal title: SIAM Journal on Imaging Sciences

Year: 2018

ISSN: 1936-4954

DOI: 10.1137/17m1140431